Multimodal Attention Dynamic Fusion Network for Facial Micro-Expression Recognition

Authors

Abstract

The emotional changes in facial micro-expressions are combinations of action units. Researchers have revealed that action units can be used as additional auxiliary data to improve micro-expression recognition. Most existing works attempt to fuse image features with action unit information, but they ignore its impact on the feature extraction process. Therefore, this paper proposes a local detail enhancement model based on a multimodal attention dynamic fusion network (MADFN) for micro-expression recognition. The model uses a masked autoencoder with learnable class tokens to remove areas with low expressive ability in micro-expression images. We then utilize a dynamic fusion module to fuse the representations of the potential features. The performance of the proposed model is evaluated and verified on the SMIC, CASME II, and SAMM datasets and their combined 3DB-Combined dataset. The experimental results demonstrate that it achieved competitive accuracy rates of 81.71%, 82.11%, and 77.21% on the SMIC, CASME II, and SAMM datasets, respectively, which shows that MADFN can help improve the discrimination of micro-expression features.
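To illustrate the multimodal dynamic-fusion idea described in the abstract, the sketch below combines an image-feature vector with an action-unit feature vector using per-sample attention weights. This is a minimal, hypothetical sketch, not the authors' MADFN implementation; the projection vectors and function names are assumptions introduced for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_attention_fusion(img_feat, au_feat, w_img, w_au):
    """Fuse image features and action-unit (AU) features dynamically.

    Each modality is scored by a (hypothetical) learned projection; a
    softmax over the two scores gives per-sample fusion weights, and the
    fused representation is the weighted sum of the two modalities.
    img_feat, au_feat: (batch, dim); w_img, w_au: (dim,).
    """
    scores = np.stack([img_feat @ w_img, au_feat @ w_au], axis=-1)  # (batch, 2)
    weights = softmax(scores, axis=-1)                              # (batch, 2)
    fused = weights[:, :1] * img_feat + weights[:, 1:] * au_feat    # (batch, dim)
    return fused, weights

# Usage with random stand-in features.
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
au = rng.normal(size=(4, 8))
fused, weights = dynamic_attention_fusion(
    img, au, rng.normal(size=8), rng.normal(size=8)
)
```

The point of weighting per sample (rather than with a fixed global ratio) is that the network can lean on AU cues when the image evidence is weak, and vice versa.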


Similar articles

Multimodal learning for facial expression recognition

In this paper, multimodal learning for facial expression recognition (FER) is proposed. The multimodal learning method makes the first attempt to learn the joint representation by considering the texture and landmark modality of facial images, which are complementary with each other. In order to learn the representation of each modality and the correlation and interaction between different moda...


Objective Classes for Micro-Facial Expression Recognition

Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different to normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset are based on Action Units and self-reports, creating conflicts during machine learning training. We will show that classifying expressions using Acti...


Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition

This paper proposes two multimodal fusion methods between brain and peripheral signals for emotion recognition. The input signals are electroencephalogram and facial expression. The stimuli are based on a subset of movie clips that correspond to four specific areas of valance-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, four basic emotion sta...


Facial Expression Recognition Based on Structural Changes in Facial Skin

Facial expressions are the most powerful and direct means of presenting human emotions and feelings and offer a window into a person's state of mind. In recent years, the study of facial expression and recognition has gained prominence, as industry and services are keen on expanding on the potential advantages of facial recognition technology. As machine vision and artificial intelligence advan...


Fusion Based FastICA Method: Facial Expression Recognition

With the continuous progress of human computer interaction, face detection as well as facial expression recognition is gaining the attention of researchers from the fields of security, psychology, image processing, and computer vision. In this area the most challenging thing is to recognize accurate facial expression with minimum time requirement. In this work, our main focus is to minimize the...



Journal

Journal title: Entropy

Year: 2023

ISSN: 1099-4300

DOI: https://doi.org/10.3390/e25091246